Bin Packing Problem

The bin packing problem is an optimization problem in which items of different sizes must be packed into a finite number of bins or containers, each of a fixed given capacity, in a way that minimizes the number of bins used. The problem has many applications, such as filling up containers, loading trucks with weight capacity constraints, creating file backups in media, and technology mapping in FPGA semiconductor chip design.

Computationally, the problem is NP-hard, and the corresponding decision problem - deciding if items can fit into a specified number of bins - is NP-complete. Despite its worst-case hardness, optimal solutions to very large instances of the problem can be produced with sophisticated algorithms. In addition, many approximation algorithms exist. For example, the first-fit algorithm provides a fast but often non-optimal solution: it places each item into the first bin in which it fits. It requires ''Θ''(''n'' log ''n'') time, where ''n'' is the number of items to be packed. The algorithm can be made much more effective by first sorting the list of items into decreasing order (sometimes known as the first-fit-decreasing algorithm), although this still does not guarantee an optimal solution and for longer lists may increase the running time of the algorithm. It is known, however, that there always exists at least one ordering of items that allows first-fit to produce an optimal solution.

There are many variations of this problem, such as 2D packing, linear packing, packing by weight, packing by cost, and so on. The bin packing problem can also be seen as a special case of the cutting stock problem. When the number of bins is restricted to 1 and each item is characterised by both a volume and a value, the problem of maximizing the value of items that can fit in the bin is known as the knapsack problem.

A variant of bin packing that occurs in practice is when items can share space when packed into a bin. Specifically, a set of items could occupy less space when packed together than the sum of their individual sizes. This variant is known as VM packing, since when virtual machines (VMs) are packed in a server, their total memory requirement could decrease due to pages shared by the VMs that need only be stored once. If items can share space in arbitrary ways, the bin packing problem is hard to even approximate. However, if the space sharing fits into a hierarchy, as is the case with memory sharing in virtual machines, the bin packing problem can be efficiently approximated.

Another variant of bin packing of interest in practice is so-called online bin packing. Here the items of different volume are supposed to arrive sequentially, and the decision maker has to decide whether to select and pack the currently observed item, or else to let it pass. Each decision is without recall. In contrast, offline bin packing allows rearranging the items in the hope of achieving a better packing once additional items arrive. This of course requires additional storage for holding the items to be rearranged.
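
To make the first-fit and first-fit-decreasing rules described above concrete, here is a minimal Python sketch. The function names, the plain list of bin loads, and the naive linear scan (which gives O(n^2) time rather than the ''Θ''(''n'' log ''n'') achievable with a suitable search-tree structure) are illustrative choices, not part of any standard library.

```python
from typing import List

def first_fit(items: List[float], capacity: float = 1.0) -> int:
    """Place each item into the first open bin that still has room,
    opening a new bin when none does; returns the number of bins used."""
    loads: List[float] = []               # current load of each open bin
    for size in items:
        for j, load in enumerate(loads):  # naive scan; a balanced tree gives O(n log n) overall
            if load + size <= capacity:
                loads[j] += size
                break
        else:                             # no open bin could take the item
            loads.append(size)
    return len(loads)

def first_fit_decreasing(items: List[float], capacity: float = 1.0) -> int:
    """Sort the items in decreasing order first, then run first-fit."""
    return first_fit(sorted(items, reverse=True), capacity)

if __name__ == "__main__":
    lst = [0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1, 0.6]   # total size 3.7, so at least 4 bins
    print(first_fit(lst), first_fit_decreasing(lst))      # 5 vs. 4 bins on this list
```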


Formal statement

In ''Computers and Intractability'', Garey and Johnson list the bin packing problem under the reference [SR1]. They define its decision variant as follows.

Instance: A finite set I of items, a size s(i) \in \mathbb{Z}^+ for each i \in I, a positive integer bin capacity B, and a positive integer K.

Question: Is there a partition of I into disjoint sets I_1,\dots, I_K such that the sum of the sizes of the items in each I_j is B or less?

Note that in the literature an equivalent notation is often used, where B = 1 and s(i) \in \mathbb{Q} \cap (0,1] for each i \in I. Furthermore, research is mostly interested in the optimization variant, which asks for the smallest possible value of K. A solution is ''optimal'' if it has minimal K. The K-value for an optimal solution for a set of items I is denoted by \mathrm{OPT}(I), or just \mathrm{OPT} if the set of items is clear from the context. A possible integer linear programming formulation of the problem is:

: \begin{align} \text{minimize} \quad & \sum_{j=1}^n y_j \\ \text{subject to} \quad & \sum_{j=1}^n x_{ij} = 1 && \text{for all } i \in I, \\ & \sum_{i \in I} s(i)\, x_{ij} \leq B\, y_j && \text{for } j = 1,\dots,n, \\ & y_j \in \{0,1\},\; x_{ij} \in \{0,1\}, \end{align}

where y_j = 1 if bin j is used and x_{ij} = 1 if item i is put into bin j.
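
The integer program above can be written down almost verbatim with an off-the-shelf modelling library. The following is a minimal sketch using the third-party PuLP package and its bundled CBC solver; the instance data and variable names are made up for illustration and are not part of any standard formulation.

```python
import pulp  # third-party modelling library: pip install pulp

sizes = [4, 8, 1, 4, 2, 1]          # s(i) for each item i (illustrative data)
B = 10                              # bin capacity
n = len(sizes)                      # at most n bins are ever needed

prob = pulp.LpProblem("bin_packing", pulp.LpMinimize)
y = [pulp.LpVariable(f"y_{j}", cat="Binary") for j in range(n)]   # is bin j used?
x = [[pulp.LpVariable(f"x_{i}_{j}", cat="Binary") for j in range(n)]
     for i in range(n)]                                           # is item i in bin j?

prob += pulp.lpSum(y)                                             # minimize the number of bins
for i in range(n):
    prob += pulp.lpSum(x[i][j] for j in range(n)) == 1            # every item in exactly one bin
for j in range(n):
    prob += pulp.lpSum(sizes[i] * x[i][j] for i in range(n)) <= B * y[j]  # capacity of bin j

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("bins used:", int(pulp.value(prob.objective)))              # 2 for this instance
```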


Hardness of bin packing

The bin packing problem is strongly NP-complete. This can be proven by reducing the strongly NP-complete 3-partition problem to bin packing.

Furthermore, there can be no approximation algorithm with absolute approximation ratio smaller than \tfrac 3 2 unless \mathsf{P} = \mathsf{NP}. This can be proven by a reduction from the partition problem: given an instance of Partition where the sum of all input numbers is 2T, construct an instance of bin packing in which the bin size is T. If there exists an equal partition of the inputs, then the optimal packing needs 2 bins; therefore, every algorithm with an approximation ratio smaller than \tfrac 3 2 must return fewer than 3 bins, which must be 2 bins. In contrast, if there is no equal partition of the inputs, then the optimal packing needs at least 3 bins.

On the other hand, bin packing is solvable in pseudo-polynomial time for any fixed number of bins K, and solvable in polynomial time for any fixed bin capacity B.
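
As an illustration of the reduction just described, the following sketch (with hypothetical function names) turns a Partition instance into the corresponding bin-packing decision instance: the inputs have an equal partition exactly when the items fit into two bins of size T.

```python
from typing import List, Tuple

def partition_to_bin_packing(numbers: List[int]) -> Tuple[List[int], int, int]:
    """Map a Partition instance to a bin-packing decision instance.
    The numbers become the items, the bin capacity is half their total
    sum T, and the question becomes: do they fit into K = 2 bins?"""
    total = sum(numbers)
    assert total % 2 == 0, "an odd total can never be split into two equal halves"
    T = total // 2
    return numbers, T, 2          # (items, bin capacity B, number of bins K)

# Example: [3, 1, 1, 2, 2, 1] sums to 10, so the bin capacity is 5, and the
# question "do they fit into 2 bins of size 5?" is answered yes by the
# equal partition {3, 2} and {1, 1, 2, 1}.
```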


Approximation algorithms for bin packing

To measure the performance of an approximation algorithm there are two approximation ratios considered in the literature. For a given list of items L, the number A(L) denotes the number of bins used when algorithm A is applied to list L, while \mathrm{OPT}(L) denotes the optimum number for this list. The absolute worst-case performance ratio R_A for an algorithm A is defined as

: R_A \equiv \inf\{r \geq 1 : A(L)/\mathrm{OPT}(L) \leq r \text{ for all lists } L\}.

On the other hand, the asymptotic worst-case ratio R_A^\infty is defined as

: R_A^\infty \equiv \inf\{r \geq 1 : \exists N > 0,\; A(L)/\mathrm{OPT}(L) \leq r \text{ for all lists } L \text{ with } \mathrm{OPT}(L) \geq N\}.

Equivalently, R_A^\infty is the smallest number such that, for some constant ''K'', for all lists ''L'':

: A(L) \leq R^\infty_A \cdot \mathrm{OPT}(L) + K.

Additionally, one can restrict the lists to those for which all items have a size of at most \alpha. For such lists, the bounded size performance ratios are denoted as R_A(\text{size}\leq \alpha) and R_A^\infty(\text{size}\leq \alpha).

Approximation algorithms for bin packing can be classified into two categories:
# Online heuristics, which consider the items in a given order and place them one by one inside the bins. These heuristics are also applicable to the online version of this problem.
# Offline heuristics, which modify the given list of items, e.g. by sorting the items by size. These algorithms are no longer applicable to the online variant of this problem. However, they have an improved approximation guarantee while maintaining the advantage of their small time-complexity. A sub-category of offline heuristics is asymptotic approximation schemes. These algorithms have an approximation guarantee of the form (1+\varepsilon)\mathrm{OPT}(L) + C for some constant C that may depend on 1/\varepsilon. For an arbitrarily large \mathrm{OPT}(L), the ratio of these algorithms gets arbitrarily close to 1+\varepsilon. However, this comes at the cost of a (drastically) increased time complexity compared to the heuristic approaches.


Online heuristics

In the online version of the bin packing problem, the items arrive one after another and the (irreversible) decision where to place an item has to be made before knowing the next item or even whether there will be another one. A diverse set of offline and online heuristics for bin packing have been studied by David S. Johnson in his Ph.D. thesis.


Single-class algorithms

There are many simple algorithms that use the following general scheme:
* For each item in the input list:
*# If the item fits into one of the currently open bins, then put it in one of these bins;
*# Otherwise, open a new bin and put the new item in it.

The algorithms differ in the criterion by which they choose the open bin for the new item in step 1 (see the linked pages for more information; sketches of Next-Fit and Best-Fit appear after this list):
* Next Fit (NF) always keeps a single open bin. When the new item does not fit into it, it closes the current bin and opens a new bin. Its advantage is that it is a bounded-space algorithm, since it only needs to keep a single open bin in memory. Its disadvantage is that its asymptotic approximation ratio is 2. In particular, NF(L) \leq 2 \cdot \mathrm{OPT}(L) - 1, and for each N \in \mathbb{N} there exists a list L such that \mathrm{OPT}(L) = N and NF(L) = 2 \cdot \mathrm{OPT}(L) - 2. Its asymptotic approximation ratio can be somewhat improved based on the item sizes: R_{NF}^\infty(\text{size}\leq\alpha) \leq 2 for all \alpha \geq 1/2 and R_{NF}^\infty(\text{size}\leq\alpha) \leq 1/(1-\alpha) for all \alpha \leq 1/2. For each AnyFit algorithm A it holds that R_{A}^{\infty}(\text{size}\leq\alpha) \leq R_{NF}^{\infty}(\text{size}\leq\alpha).
* Next-k-Fit (NkF) is a variant of Next-Fit, but instead of keeping only one bin open, the algorithm keeps the last k bins open and chooses the first bin in which the item fits. Therefore, it is called a ''k-bounded space'' algorithm. For k \geq 2, NkF delivers results that are improved compared to the results of NF; however, increasing k to constant values larger than 2 improves the algorithm no further in its worst-case behavior. If algorithm A is an AlmostAnyFit algorithm and m = \lfloor 1/\alpha\rfloor \geq 2, then R_{A}^{\infty}(\text{size}\leq\alpha) \leq R_{NkF}^{\infty}(\text{size}\leq\alpha) = 1+1/m.
* First-Fit (FF) keeps all bins open, in the order in which they were opened. It attempts to place each new item into the ''first'' bin in which it fits. Its approximation ratio is FF(L) \leq \lfloor 1.7\,\mathrm{OPT}(L)\rfloor, and there is a family of input lists L for which FF(L) matches this bound.
* Best-Fit (BF), too, keeps all bins open, but attempts to place each new item into the bin with the ''maximum'' load in which it fits. Its approximation ratio is identical to that of FF, that is: BF(L) \leq \lfloor 1.7\,\mathrm{OPT}(L)\rfloor, and there is a family of input lists L for which BF(L) matches this bound.
* Worst-Fit (WF) attempts to place each new item into the bin with the ''minimum'' load. It can behave as badly as Next-Fit, and will do so on the worst-case list for which NF(L) = 2 \cdot \mathrm{OPT}(L) - 2. Furthermore, it holds that R_{WF}^{\infty}(\text{size}\leq \alpha) = R_{NF}^{\infty}(\text{size}\leq \alpha). Since WF is an AnyFit algorithm, there exists an AnyFit algorithm A such that R_{A}^{\infty}(\alpha) = R_{NF}^{\infty}(\alpha).
* Almost Worst-Fit (AWF) attempts to place each new item inside the ''second most empty'' open bin (or emptiest bin if there are two such bins). If it does not fit, it tries the most empty one. It has an asymptotic worst-case ratio of \tfrac{17}{10}.

In order to generalize these results, Johnson introduced two classes of online heuristics, called ''any-fit'' algorithms and ''almost-any-fit'' algorithms:
* In an AnyFit (AF) algorithm, if the current nonempty bins are ''B''1,...,''Bj'', then the current item will not be packed into ''B''''j''+1 unless it does not fit in any of ''B''1,...,''Bj''. The FF, WF, BF and AWF algorithms satisfy this condition. Johnson proved that, for any AnyFit algorithm A and any \alpha:
*:R_{FF}^{\infty}(\alpha) \leq R_{A}^{\infty}(\alpha) \leq R_{WF}^{\infty}(\alpha).
* In an AlmostAnyFit (AAF) algorithm, if the current nonempty bins are ''B''1,...,''Bj'', and of these bins, ''Bk'' is the unique bin with the smallest load, then the current item will not be packed into ''Bk'' unless it does not fit into any of the bins to its left. The FF, BF and AWF algorithms satisfy this condition, but WF does not. Johnson proved that, for any AAF algorithm A and any \alpha:
*:R_{A}^{\infty}(\alpha) = R_{FF}^{\infty}(\alpha). In particular, R_{FF}^{\infty} = 1.7.
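
A minimal Python sketch of the Next-Fit and Best-Fit rules follows (First-Fit was sketched in the introduction). The function names and the list-of-loads representation are illustrative, and the linear scans are chosen for clarity rather than for the best possible time complexity.

```python
from typing import List

def next_fit(items: List[float], capacity: float = 1.0) -> int:
    """Next-Fit: keep a single open bin; when the next item does not fit,
    close that bin and open a new one."""
    bins_used = 0
    load = capacity                 # forces a new bin to be opened for the first item
    for size in items:
        if load + size > capacity:
            bins_used += 1          # close the current bin, open a new one
            load = size
        else:
            load += size
    return bins_used

def best_fit(items: List[float], capacity: float = 1.0) -> int:
    """Best-Fit: place each item into the open bin with the maximum load
    among those in which it still fits; otherwise open a new bin."""
    loads: List[float] = []
    for size in items:
        fitting = [j for j, load in enumerate(loads) if load + size <= capacity]
        if fitting:
            j = max(fitting, key=lambda j: loads[j])   # fullest bin that still has room
            loads[j] += size
        else:
            loads.append(size)
    return len(loads)

if __name__ == "__main__":
    lst = [0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1, 0.6]
    print(next_fit(lst), best_fit(lst))   # Next-Fit typically uses the most bins
```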


Refined algorithms

Better approximation ratios are possible with heuristics that are not AnyFit. These heuristics usually keep several classes of open bins, devoted to items of different size ranges (see the linked pages for more information):
* Refined-First-Fit (RFF) partitions the item sizes into four ranges: \left(\frac 1 2,1\right], \left(\frac 2 5, \frac 1 2\right], \left(\frac 1 3, \frac 2 5\right], and \left(0, \frac 1 3\right]. Similarly, the bins are categorized into four classes. The next item i \in L is first assigned to its corresponding class. Inside that class, it is assigned to a bin using first-fit. Note that this algorithm is not an Any-Fit algorithm, since it may open a new bin despite the fact that the current item fits inside an open bin. This algorithm was first presented by Andrew Chi-Chih Yao, who proved that it has an approximation guarantee of RFF(L) \leq (5/3) \cdot \mathrm{OPT}(L) + 5 and presented a family of lists L_k with RFF(L_k) = (5/3)\mathrm{OPT}(L_k) + 1/3 for \mathrm{OPT}(L_k) = 6k+1.
* Harmonic-k partitions the interval of sizes (0,1] based on a harmonic progression into k-1 pieces I_j := \left(\frac{1}{j+1}, \frac 1 j\right] for 1\leq j < k and I_k := \left(0, \frac 1 k\right], such that \bigcup_{j=1}^k I_j = (0,1]. This algorithm was first described by Lee and Lee. It has a time complexity of \mathcal{O}(|L|\log(|L|)), and at each step there are at most k open bins that can be potentially used to place items, i.e., it is a k-bounded space algorithm. For k \rightarrow \infty, its approximation ratio satisfies R_{Hk}^{\infty} \approx 1.6910, and it is asymptotically tight.
* Refined-Harmonic combines ideas from Harmonic-k with ideas from Refined-First-Fit. It places the items larger than \tfrac 1 3 similarly as in Refined-First-Fit, while the smaller items are placed using Harmonic-k. The intuition for this strategy is to reduce the huge waste for bins containing pieces that are just larger than \tfrac 1 2. This algorithm was first described by Lee and Lee. They proved that for k = 20 it holds that R^\infty_{RH} \leq 373/228.
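
The following is a rough sketch of the Harmonic-k scheme described above: each item is classified by the harmonic interval its size falls into, and each class is packed with its own Next-Fit-style open bin, so at most k bins are open at any time. The function names and the use of floor(1/size) for classification are illustrative choices.

```python
import math
from typing import List, Optional

def harmonic_class(size: float, k: int) -> int:
    """Return the Harmonic-k class of an item: class j for sizes in
    (1/(j+1), 1/j], and class k for sizes in (0, 1/k]."""
    j = math.floor(1.0 / size)      # size in (1/(j+1), 1/j] gives floor(1/size) == j
    return min(j, k)

def harmonic_k(items: List[float], k: int, capacity: float = 1.0) -> int:
    """Pack each item with Next-Fit restricted to its own class, so at most
    k bins (one per class) are open at once: a k-bounded-space scheme."""
    open_load: List[Optional[float]] = [None] * (k + 1)  # open bin load per class
    bins_used = 0
    for size in items:
        j = harmonic_class(size / capacity, k)
        if open_load[j] is None or open_load[j] + size > capacity:
            bins_used += 1           # close the class-j bin (if any) and open a new one
            open_load[j] = size
        else:
            open_load[j] += size
    return bins_used
```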


General lower bounds for online algorithms

Yao proved in 1980 that there can be no online algorithm with an asymptotic competitive ratio smaller than \tfrac 3 2. Brown and Liang improved this bound to 1.53635. Afterward, this bound was improved to 1.54014 by van Vliet. In 2012, this lower bound was again improved by Békési and Galambos to \tfrac{248}{161} \approx 1.54037.


Offline algorithms

In the offline version of bin packing, the algorithm can see all the items before starting to place them into bins. This allows it to attain improved approximation ratios.


Multiplicative approximation

The simplest technique used by offline algorithms is:
* Order the input list by descending size;
* Run an online algorithm on the ordered list.

Johnson proved that any AnyFit algorithm A that runs on a list ordered by descending size has an asymptotic approximation ratio of

: 1.22 \approx \frac{11}{9} \leq R^\infty_A \leq \frac{5}{4} = 1.25.

Some algorithms in this family are (see the linked pages for more information):
* First-Fit-Decreasing (FFD) - orders the items by descending size, then calls First-Fit. Its approximation ratio is FFD(I) \leq \frac{11}{9}\mathrm{OPT}(I) + \frac 6 9, and this is tight.
* Next-Fit-Decreasing (NFD) - orders the items by descending size, then calls Next-Fit. Its approximation ratio is slightly less than 1.7 in the worst case. It has also been analyzed probabilistically. Next-Fit packs a list and its inverse into the same number of bins; therefore, Next-Fit-Increasing has the same performance as Next-Fit-Decreasing.
* Modified-First-Fit-Decreasing (MFFD) - improves on FFD for items larger than half a bin by classifying items by size into four size classes: large, medium, small, and tiny, corresponding to items with size > 1/2 bin, > 1/3 bin, > 1/6 bin, and smaller items, respectively. Its approximation guarantee is MFFD(I) \leq \frac{71}{60}\mathrm{OPT}(I) + 1.

Fernandez de la Vega and Lueker presented an asymptotic PTAS for bin packing. For every \varepsilon>0, their algorithm finds a solution with size at most (1+\varepsilon)\mathrm{OPT} + 1 and runs in time \mathcal{O}(n\log(1/\varepsilon)) + \mathcal{O}_\varepsilon(1), where \mathcal{O}_\varepsilon(1) denotes a function only dependent on 1/\varepsilon. For this algorithm, they invented the method of ''adaptive input rounding'': the input numbers are grouped and rounded up to the value of the maximum in each group. This yields an instance with a small number of different sizes, which can be solved exactly using the configuration linear program.
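
The grouping-and-rounding step can be sketched as follows. This is only the adaptive input rounding idea in isolation: in the actual Fernandez de la Vega-Lueker scheme the small items are set aside, the number of groups is chosen as a function of \varepsilon, and the rounded instance is then solved exactly. The function name and signature here are illustrative.

```python
from typing import List

def round_up_by_groups(items: List[float], num_groups: int) -> List[float]:
    """Adaptive input rounding (linear grouping), as a rough sketch:
    sort the sizes in descending order, cut the list into `num_groups`
    consecutive groups, and round every size up to the largest size in
    its group.  The result has at most `num_groups` distinct sizes and
    dominates the original instance item by item."""
    sizes = sorted(items, reverse=True)
    n = len(sizes)
    group_len = -(-n // num_groups)                   # ceil(n / num_groups)
    rounded: List[float] = []
    for start in range(0, n, group_len):
        group = sizes[start:start + group_len]
        rounded.extend([group[0]] * len(group))       # group[0] is the group maximum
    return rounded

# Example: three groups of two items each.
# round_up_by_groups([0.41, 0.38, 0.35, 0.33, 0.31, 0.30], 3)
#   -> [0.41, 0.41, 0.35, 0.35, 0.31, 0.31]
```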


Additive approximation

The Karmarkar-Karp bin packing algorithm finds a solution with size at most \mathrm{OPT} + \mathcal{O}(\log^2(\mathrm{OPT})), and runs in time polynomial in n (the polynomial has a high degree, at least 8). Rothvoss presented an algorithm that generates a solution with at most \mathrm{OPT} + \mathcal{O}(\log(\mathrm{OPT})\cdot \log\log(\mathrm{OPT})) bins. Hoberg and Rothvoss improved this algorithm to generate a solution with at most \mathrm{OPT} + \mathcal{O}(\log(\mathrm{OPT})) bins. The algorithm is randomized, and its running time is polynomial in n.


Exact algorithms

Martello and Toth developed an exact algorithm for the 1-dimensional bin packing problem, called MTP. A faster alternative is the Bin Completion algorithm proposed by Korf in 2002 and later improved (R. E. Korf (2003), ''An improved algorithm for optimal bin packing'', Proceedings of the International Joint Conference on Artificial Intelligence, pp. 1252-1258). A further improvement was presented by Schreiber and Korf in 2013. The new Improved Bin Completion algorithm is shown to be up to five orders of magnitude faster than Bin Completion on non-trivial problems with 100 items, and outperforms the BCP (branch-and-cut-and-price) algorithm by Belov and Scheithauer on problems that have fewer than 20 bins as the optimal solution. Which algorithm performs best depends on problem properties like the number of items, the optimal number of bins, the unused space in the optimal solution, and value precision.
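
The exact algorithms named above are involved; for orientation, here is a minimal branch-and-bound sketch for integer item sizes. It is not MTP, Bin Completion, or BCP: it simply branches on which bin receives the next item (largest first), skips symmetric bins with equal load, and stops early once the trivial lower bound ceil(total size / capacity) is met. All names are illustrative.

```python
from typing import List

def exact_min_bins(items: List[int], capacity: int) -> int:
    """Minimal exhaustive branch-and-bound for 1-D bin packing with integer sizes."""
    sizes = sorted(items, reverse=True)
    assert all(s <= capacity for s in sizes), "every item must fit in a bin"
    lower = -(-sum(sizes) // capacity)           # ceil(sum / capacity), a global lower bound
    best = len(sizes)                            # trivial upper bound: one bin per item

    def branch(i: int, loads: List[int]) -> None:
        nonlocal best
        if len(loads) >= best or best == lower:  # cannot improve / already provably optimal
            return
        if i == len(sizes):
            best = len(loads)                    # found a strictly better packing
            return
        size = sizes[i]
        seen = set()
        for j in range(len(loads)):              # try every open bin with room
            if loads[j] + size <= capacity and loads[j] not in seen:
                seen.add(loads[j])               # bins with equal load are symmetric
                loads[j] += size
                branch(i + 1, loads)
                loads[j] -= size
        loads.append(size)                       # or open a new bin for the item
        branch(i + 1, loads)
        loads.pop()

    branch(0, [])
    return best

if __name__ == "__main__":
    print(exact_min_bins([7, 6, 5, 5, 4, 3, 2, 2], capacity=10))  # -> 4
```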


Small number of different sizes

A special case of bin packing is when there is a small number ''d'' of different item sizes. There can be many different items of each size. This case is also called ''high-multiplicity bin packing'', and it admits more efficient algorithms than the general problem.


Cardinality constraints on the bins

There is a variant of bin packing in which there are cardinality constraints on the bins: each bin can contain at most ''k'' items, for some fixed integer ''k'' (a sketch of a simple heuristic for this variant appears after this list).
* Krause, Shen and Schwetman introduce this problem as a variant of optimal job scheduling: a computer has some ''k'' processors. There are some ''n'' jobs that take unit time (1), but have different memory requirements. Each time-unit is considered a single bin. The goal is to use as few bins (= time units) as possible, while ensuring that in each bin, at most ''k'' jobs run. They present several heuristic algorithms that find a solution with at most 2\,\mathrm{OPT} bins.
* Kellerer and Pferschy present an algorithm with run-time O(n^2 \log n) that finds a solution with at most \left\lceil\tfrac{3}{2}\mathrm{OPT}\right\rceil bins. Their algorithm performs a binary search for OPT. For every searched value ''m'', it tries to pack the items into 3''m''/2 bins.
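
A natural baseline for the cardinality-constrained variant is First-Fit with the extra count check, sketched below. This is an illustrative heuristic only, not the Krause-Shen-Schwetman or Kellerer-Pferschy algorithm.

```python
from typing import List

def first_fit_cardinality(items: List[float], capacity: float, k: int) -> int:
    """First-Fit adapted to cardinality constraints: a bin may receive a new
    item only if its load stays within `capacity` AND it currently holds
    fewer than `k` items."""
    loads: List[float] = []
    counts: List[int] = []
    for size in items:
        for j in range(len(loads)):
            if loads[j] + size <= capacity and counts[j] < k:
                loads[j] += size
                counts[j] += 1
                break
        else:                       # no open bin satisfies both constraints
            loads.append(size)
            counts.append(1)
    return len(loads)
```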


Non-additive functions

There are various ways to extend the bin-packing model to more general cost and load functions:
* Anily, Bramel and Simchi-Levi study a setting where the cost of a bin is a concave function of the number of items in the bin. The objective is to minimize the total ''cost'' rather than the number of bins. They show that next-fit-increasing bin packing attains an absolute worst-case approximation ratio of at most 7/4, and an asymptotic worst-case ratio of 1.691 for any concave and monotone cost function.
* Cohen, Keller, Mirrokni and Zadimoghaddam study a setting where the size of the items is not known in advance, but is a random variable. This is particularly common in cloud computing environments. While there is an upper bound on the amount of resources a certain user needs, most users use much less than the capacity. Therefore, the cloud manager may gain a lot by slight overcommitment. This induces a variant of bin packing with chance constraints: the probability that the sum of sizes in each bin is at most ''B'' should be at least ''p'', where ''p'' is a fixed constant (standard bin packing corresponds to ''p'' = 1). They show that, under mild assumptions, this problem is equivalent to a ''submodular bin packing'' problem, in which the "load" in each bin is not equal to the sum of items, but to a certain submodular function of it.


Related problems

In the bin packing problem, the ''size'' of the bins is fixed and their ''number'' can be enlarged (but should be as small as possible). In contrast, in the multiway number partitioning problem, the ''number'' of bins is fixed and their ''size'' can be enlarged. The objective is to find a partition in which the bin sizes are as nearly equal as possible (in the variant called the multiprocessor scheduling problem or minimum makespan problem, the goal is specifically to minimize the size of the largest bin).

In the inverse bin packing problem, both the number of bins and their sizes are fixed, but the item sizes can be changed. The objective is to achieve the minimum perturbation to the item size vector so that all the items can be packed into the prescribed number of bins.

In the maximum resource bin packing problem, the goal is to ''maximize'' the number of bins used, such that, for some ordering of the bins, no item in a later bin fits in an earlier bin. In a dual problem, the number of bins is fixed, and the goal is to minimize the total number or the total size of items placed into the bins, such that no remaining item fits into an unfilled bin.

In the bin covering problem, the bin size is bounded ''from below'': the goal is to ''maximize'' the number of bins used such that the total size in each bin is at least a given threshold.

In the fair indivisible chore allocation problem (a variant of fair item allocation), the items represent chores, and there are different people each of whom attributes a different difficulty-value to each chore. The goal is to allocate to each person a set of chores with an upper bound on its total difficulty-value (thus, each person corresponds to a bin). Many techniques from bin packing are used in this problem too.

In the guillotine cutting problem, both the items and the "bins" are two-dimensional rectangles rather than one-dimensional numbers, and the items have to be cut from the bin using end-to-end cuts.

In the selfish bin packing problem, each item is a player who wants to minimize its cost. There is also a variant of bin packing in which the cost that should be minimized is not the number of bins, but rather a certain concave function of the number of items in each bin.

Other variants are two-dimensional bin packing, three-dimensional bin packing, and bin packing with delivery.


Resources


* BPPLIB - a library of surveys, codes, benchmarks, generators, solvers, and bibliography.


Implementations

* Online: visualization of heuristics for 1D and 2D bin packing.
* Python: The prtpy package contains code for various number-partitioning, bin-packing and bin-covering algorithms. The binpacking package contains greedy algorithms for solving two typical bin packing problems.
* C++: The bin-packing package contains various greedy algorithms as well as test data. The OR-tools package contains bin packing algorithms in C++, with wrappers in Python, C# and Java.
* PHP: PHP class to pack files without exceeding a given size limit (http://www.phpclasses.org/package/2027-PHP-Pack-files-without-exceeding-a-given-size-limit.html).
* Haskell: An implementation of several bin packing heuristics, including FFD and MFFD.
* C: Fpart, an open-source command-line tool to pack files (BSD-licensed).
* C#: Bin Packing and Cutting Stock Solver.
* Java: caparf - Cutting And Packing Algorithms Research Framework, including a number of bin packing algorithms and test data.

